
    Towards affective computing that works for everyone

    Missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions affecting facial expressions and speech, or age-related changes in facial appearance and health. Our work analyzes existing affective computing datasets and highlights a disconcerting lack of diversity regarding race, sex/gender, age, and (mental) health representation. By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, this paper provides recommendations and calls for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field. Comment: 8 pages, 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII)

    Innovation Letter: Experimenting with Competing Techno-Legal Standards for Robotics

    There are legitimacy and discrimination issues arising from overreliance on private standards to regulate new technologies. On the legitimacy plane, standards shift regulation from public democratic processes to private ones that are not subject to rule-of-law guarantees, reviving the discussion on balancing the legitimacy and effectiveness of techno-legal solutions and further aggravating this complex panorama. On the discriminatory plane, incentive problems exacerbate discriminatory outcomes for often marginalized communities. Indeed, standardization bodies have no incentive to involve and focus on minorities and marginal groups, because 'unanimity' in voting means unanimity only among those sitting at the table, and there are no accountability mechanisms to turn this around. In this letter, we put forward some ideas on how to devise an institutional framework under which standardization bodies invest in anticipating and preventing harm to people's fundamental rights.

    Implications of the Google’s US 8,996,429 B1 Patent in Cloud Robotics-Based Therapeutic Researches

    Intended to be informative to both the legal and engineering communities, this chapter raises awareness of the implications of recent patents in the field of human-robot interaction (HRI) studies. Google patented the use of cloud robotics to create robot personality(-ies). The broad claims of the patent could hamper many HRI research projects in the field. One of the research lines potentially frustrated concerns robotic therapies, because personalization of the robot accelerates the process of engagement, which is extremely beneficial for robotic cognitive therapies. This chapter therefore presents a scientific examination, description, and comparison of the Tufts University CEEO project “Data Analysis and Collection through Robotic Companions and LEGO® Engineering with Children on the Autism Spectrum” and Google's US 8,996,429 B1 Patent on Methods and Systems for Robot Personality Development. Some remarks on the ethical implications of the patent close the chapter and open the discussion to both communities.

    Towards a Legal and Ethical Framework for Personal Care Robots. Analysis of Person Carrier, Physical Assistant and Mobile Servant Robots.

    Technology is rapidly developing, and regulators and robot creators inevitably have to come to terms with new and unexpected scenarios. A thorough analysis of this new and continuously evolving reality could help us better understand the current situation and pave the way for the future creation of a legal and ethical framework. This is clearly a wide and complex goal, considering the variety of new technologies available today and under development. This thesis therefore focuses on evaluating the impacts of personal care robots. In particular, it analyzes how roboticists adjust their creations to the existing regulatory framework for legal compliance purposes. By carrying out an impact assessment, existing regulatory gaps and lack of regulatory clarity can be highlighted. These gaps should in turn be considered by lawmakers for a future legal framework for personal care robots. This assessment should be made first against regulations. If the creators of the robot do not encounter any limitations, they can proceed with its development. If there are limitations, robot creators will either (1) adjust the robot to comply with the existing regulatory framework; (2) start a negotiation with regulators to change the law; or (3) carry out the original plan and risk being non-compliant. The regulator can discuss existing (or lacking) regulations with robot developers and give a legal response accordingly. In an ideal world, robots are clear of impacts, so threats can be met with prevention and opportunities with facilitation. In reality, the impacts of robots are often uncertain and less clear, especially when robots are deployed in care applications. Regulators will therefore have to address uncertain risks, ambiguous impacts, and as-yet unknown effects.

    Healthcare Digitalisation and the Changing Nature of Work and Society

    Digital technologies have profound effects on all areas of modern life, including the workplace. Some forms of digitalisation simply exchange paper for digital files, while more complex instances involve machines performing a wide variety of tasks on behalf of humans. While some are wary of the displacement of humans that occurs when, for example, robots perform tasks previously performed by humans, others argue that robots merely take over tasks that should have been carried out by machines in the first place and never by humans. Understanding the impacts of digitalisation in the workplace requires understanding the effects of digital technology on the tasks we perform, and these effects are often not foreseeable. In this article, the changing nature of work in the health care sector is used as a case to analyse such change and its implications on three levels: the societal (macro), organisational (meso), and individual (micro). Analysing these transformations with a layered approach helps reveal the actual magnitude of the changes that are occurring and creates the foundation for an informed regulatory and societal response. We argue that, while artificial intelligence, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution rather than infrastructural change. Even though this undermines the assumption that these new technologies constitute a fourth industrial revolution, their effects on the micro and meso levels still require both political awareness and proportionate regulatory responses.

    Research in AI has Implications for Society: How do we Respond?

    Artificial intelligence (AI) offers previously unimaginable possibilities, solving problems faster and more creatively than before, inviting hope and change but also fear and resistance. Unfortunately, while the pace of technology development and application accelerates dramatically, the understanding of its implications does not follow suit. Moreover, while mechanisms to anticipate, control, and steer AI development to prevent adverse consequences seem necessary, the current power dynamics within which society must frame such development cause much confusion. In this article we ask whether AI advances should be restricted, modified, or adjusted based on their potential legal, ethical, and societal consequences. We examine four possible arguments in favor of subjecting scientific activity to stricter ethical and political control and critically analyze them in light of the perspective that science, ethics, and politics should strive for a division of labor and balance of power rather than a conflation. We argue that the domains of science, ethics, and politics should not be conflated if we are to retain the ability to adequately assess the appropriate course of action in light of AI's implications. Such conflation could lead to uncertain and questionable outcomes, such as politicized science, ethics washing, ethics constrained by corporate or scientific interests, insufficient regulation, and political inaction due to a misplaced belief in industry self-regulation. As such, we argue that the different functions of science, ethics, and politics must be respected to ensure AI development serves the interests of society.

    Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten

    To understand the Right to be Forgotten in the context of artificial intelligence, it is necessary first to delve into an overview of the concepts of human and AI memory and forgetting. Our current law appears to treat human and machine memory alike, supporting a fictitious understanding of memory and forgetting that does not comport with reality. (Some authors have already highlighted concerns about perfect remembering.) This Article examines the problem of AI memory and the Right to be Forgotten, using this example as a model for understanding the failures of current privacy law to reflect the realities of AI technology. First, this Article analyzes the legal background behind the Right to be Forgotten in order to understand its potential applicability to AI, including a discussion of the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the Authors explore whether the Right to be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address it in a post-AI world. The Authors discuss the technical problems faced when adhering to a strict interpretation of the data deletion requirements of the Right to be Forgotten, ultimately concluding that it may be impossible to fulfill its legal aims in artificial intelligence environments. Finally, this Article addresses the core issue at the heart of the AI and Right to be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.

    Promoting inclusiveness in exoskeleton robotics: Addressing challenges for pediatric access

    Pediatric access to exoskeletons lags far behind that of adults. In this article, we promote inclusiveness in exoskeleton robotics by identifying and addressing challenges and barriers to pediatric access to this potentially life-changing technology. We first present available exoskeleton solutions for upper and lower limbs and note the variability in their availability. Next, we examine the possible reasons for this variability in access, focusing explicitly on children, who constitute a categorically vulnerable population and also stand to benefit significantly from the use of this technology at this critical point in their physical and emotional growth. We propose a life-based design approach as a way to address some of the design challenges and offer insights toward resolving market viability and implementation challenges. We conclude that developing pediatric exoskeletons that allow for and ensure access to health-enhancing technology is a crucial aspect of the responsible provision of health care to all members of society. For children, the stakes are particularly high, given that this technology, when used at a critical phase of a child's development, not only holds out the possibility of improving quality of life but can also improve long-term health prospects.